Results 1-20 of 168
1.
Sensors (Basel) ; 24(7)2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38610510

ABSTRACT

The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators validate the research hypothesis: the correlation between jurors' emotional responses and valence values, the accuracy of the jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by facial expression recognition (FER). Specifically, analysis of attention levels across different states reveals a discernible decrease, with 70 percent of jurors exhibiting reduced attention in the 'distracted' state and 62 percent in the 'heavy-eyed' state. Regression analysis, in turn, shows that the correlation between jurors' valence and their choices in the jury test increases when only the data from attentive jurors are considered. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.
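As an illustration of the analysis this abstract describes, the sketch below correlates FER-derived valence with jury ratings, with and without filtering for attentive trials. The column names and the attention threshold are assumptions for illustration, not the study's actual variables:

```python
import pandas as pd
from scipy.stats import pearsonr

def valence_choice_correlation(df: pd.DataFrame, attention_threshold=0.6):
    """df holds per-trial columns (assumed names): 'valence' (FER output),
    'choice' (juror rating), and 'attention' (estimated attention level)."""
    attentive = df[df["attention"] >= attention_threshold]
    r_all, p_all = pearsonr(df["valence"], df["choice"])
    r_att, p_att = pearsonr(attentive["valence"], attentive["choice"])
    # the abstract reports that the attentive-only correlation is higher
    return {"all": (r_all, p_all), "attentive": (r_att, p_att)}
```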


Subjects
Facial Recognition, Humans, Reproducibility of Results, Acoustics, Sound, Emotions
2.
Article in English, Chinese | MEDLINE | ID: mdl-38650447

ABSTRACT

Attention-deficit/hyperactivity disorder (ADHD) is a chronic neurodevelopmental disorder characterized by inattention, hyperactivity-impulsivity, and working memory deficits. Social dysfunction is one of the major challenges faced by children with ADHD. Children with ADHD have been found to perform less well than typically developing children on facial expression recognition (FER) tasks. In general, children with ADHD have some difficulties with FER, although some studies suggest that their accuracy on specific emotion recognition does not differ significantly from that of typically developing children. The neuropsychological mechanisms underlying these difficulties are as follows: 1. neuroanatomically, compared to typically developing children, children with ADHD show smaller gray matter volume and surface area in the amygdala and medial prefrontal cortex regions, as well as reduced density and volume of axons/cells in certain frontal white matter fiber tracts; 2. neurophysiologically, children with ADHD exhibit increased slow-wave activity in their electroencephalogram, and event-related potential studies reveal abnormalities in emotional regulation and in responses to angry faces when viewing facial stimuli; 3. psychologically, psychosocial stressors may influence FER abilities in children with ADHD, and sleep deprivation may significantly increase their recognition threshold for negative expressions such as sadness and anger. This article reviews research progress on the FER abilities of children with ADHD over the past three years, analyzing their FER deficits from three dimensions: neuroanatomy, neurophysiology, and psychology, aiming to provide new perspectives for further research and clinical treatment of ADHD.

3.
BMC Psychiatry ; 24(1): 226, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38532335

ABSTRACT

BACKGROUND: Patients with schizophrenia (SCZ) exhibit deficits in recognizing facial expressions with unambiguous valence. However, only a limited number of studies have examined how these patients fare in interpreting facial expressions with ambiguous valence (for example, surprise). Thus, we aimed to explore the influence of emotional background information on the recognition of ambiguous facial expressions in SCZ. METHODS: A 3 (emotion: negative, neutral, and positive) × 2 (group: healthy controls and SCZ) experimental design was adopted. The experimental materials consisted of 36 images of negative emotions, 36 images of neutral emotions, 36 images of positive emotions, and 36 images of surprised facial expressions. In each trial, a briefly presented surprised face was preceded by an affective image. Participants (36 SCZ and 36 healthy controls (HC)) were required to rate the emotional experience induced by the surprised facial expressions on a 9-point rating scale. The data were analyzed with analyses of variance (ANOVAs) and correlation analysis. RESULTS: First, the SCZ group reported a more positive emotional experience under the positive cued condition than under the negative cued condition, while the HC group reported the strongest positive emotional experience in the positive cued condition, a moderate experience in the neutral cued condition, and the weakest in the negative cued condition. Second, the SCZ group showed longer reaction times (RTs) than the HC group for recognizing surprised facial expressions. The severity of schizophrenia symptoms in the SCZ group was negatively correlated with rating scores for emotional experience under the neutral and positive cued conditions. CONCLUSIONS: Recognition of surprised facial expressions was influenced by background information in both SCZ and HC, and by the negative symptoms in SCZ. The present study indicates that the role of background information should be fully considered when examining the ability of SCZ patients to recognize ambiguous facial expressions.


Subjects
Facial Recognition, Schizophrenia, Humans, Emotions, Recognition (Psychology), Facial Expression, China
4.
Neural Netw ; 170: 337-348, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38006736

ABSTRACT

Facial expression recognition (FER) in the wild is challenging due to disturbing factors such as pose variation, occlusion, and illumination variation. Attention mechanisms can relieve these issues by enhancing expression-relevant information and suppressing expression-irrelevant information. However, most methods apply the same attention mechanism to feature tensors with varying spatial and channel sizes across different network layers, disregarding the dynamically changing sizes of these tensors. To solve this issue, this paper proposes a hierarchical attention network with progressive feature fusion for FER. Specifically, first, to aggregate diverse complementary features, a diverse feature extraction module based on several feature aggregation blocks is designed to exploit local- and global-context features, low- and high-level features, and gradient features that are robust to illumination variation. Second, to effectively fuse these diverse features, a hierarchical attention module (HAM) is designed to progressively enhance discriminative features from key parts of facial images and suppress task-irrelevant features from disturbing facial regions. Extensive experiments show that the model achieves the best performance among existing FER methods.
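The abstract's illumination-robust "gradient features" are not specified further; one plausible reading is a fixed Sobel-filter branch producing gradient-magnitude maps. The sketch below shows that assumed interpretation, not the paper's actual module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientBranch(nn.Module):
    """Fixed Sobel filters applied per channel to produce gradient-magnitude
    features; an assumed stand-in for the paper's gradient features."""
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.],
                           [-2., 0., 2.],
                           [-1., 0., 1.]])
        self.register_buffer("gx", gx.view(1, 1, 3, 3))
        self.register_buffer("gy", gx.t().reshape(1, 1, 3, 3))

    def forward(self, x):  # x: (B, C, H, W)
        c = x.shape[1]
        wx = self.gx.repeat(c, 1, 1, 1)  # depthwise weights (C, 1, 3, 3)
        wy = self.gy.repeat(c, 1, 1, 1)
        gx = F.conv2d(x, wx, padding=1, groups=c)
        gy = F.conv2d(x, wy, padding=1, groups=c)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)  # gradient magnitude
```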


Subjects
Facial Recognition, Face, Lighting, Facial Expression
5.
Sensors (Basel) ; 23(24)2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38139503

ABSTRACT

Facial expression recognition is crucial for understanding human emotions and nonverbal communication. With the growing prevalence of facial recognition technology and its various applications, accurate and efficient facial expression recognition has become a significant research area. However, most previous methods have focused on designing unique deep-learning architectures while overlooking the loss function. This study presents a new loss function that considers inter- and intra-class variations simultaneously and can be applied to CNN architectures for facial expression recognition. More concretely, the loss function reduces intra-class variations by minimizing the distances between deep features and their corresponding class centers. It increases inter-class variations by maximizing the distances between deep features and their non-corresponding class centers, as well as the distances between different class centers. Numerical results from several benchmark facial expression databases, such as Cohn-Kanade Plus, Oulu-CASIA, MMI, and FER2013, demonstrate the capability of the proposed loss function compared with existing ones.
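The described loss maps naturally onto a center-loss variant with three terms: feature-to-own-center distance (intra-class), feature-to-other-center distance (inter-class), and center-to-center separation. The PyTorch sketch below is a minimal reconstruction under that reading; the margin formulation and equal weighting are assumptions, not the paper's exact definition:

```python
import torch
import torch.nn as nn

class InterIntraCenterLoss(nn.Module):
    """Pulls features toward their own class center and pushes them away
    from other class centers and center pairs. Hypothetical sketch."""
    def __init__(self, num_classes, feat_dim, margin=1.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, features, labels):
        # distances from each feature to every class center: (B, C)
        dists = torch.cdist(features, self.centers)
        # intra-class term: distance to the corresponding class center
        intra = dists.gather(1, labels.unsqueeze(1)).squeeze(1).mean()
        # inter-class term: hinge on distances to non-corresponding centers
        mask = torch.ones_like(dists, dtype=torch.bool)
        mask.scatter_(1, labels.unsqueeze(1), False)
        inter = torch.relu(self.margin - dists[mask]).mean()
        # separation between different class centers
        center_sep = torch.relu(self.margin - torch.pdist(self.centers)).mean()
        return intra + inter + center_sep
```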


Subjects
Facial Recognition, Neural Networks (Computer), Humans, Algorithms, Facial Expression, Emotions
6.
Sensors (Basel) ; 23(20)2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37896470

ABSTRACT

Facial expression recognition (FER) poses a complex challenge due to diverse factors such as variations in facial morphology, lighting conditions, and cultural nuances in emotion representation. To address these hurdles, FER algorithms leverage advanced data analysis to infer emotional states from facial expressions. In this study, we introduce a universal validation methodology that assesses any FER algorithm's performance through a web application in which subjects respond to emotive images. We present FeelPix, a labelled database generated from facial landmark coordinates collected during FER algorithm validation. FeelPix is available for training and testing generic FER algorithms that accurately identify users' facial expressions. A testing algorithm that classifies emotions based on FeelPix data confirms its reliability. Designed as a computationally lightweight solution, it finds applications in online systems. Our contribution improves facial expression recognition, enabling the identification and interpretation of emotions associated with facial expressions and offering deeper insights into individuals' emotional reactions, with implications for healthcare, security, human-computer interaction, and entertainment.
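A computationally lightweight classifier over landmark coordinates, as described, could look like the sketch below. The normalization step and the choice of logistic regression are assumptions for illustration; FeelPix's actual schema and the paper's testing algorithm may differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_landmark_fer(landmarks, labels):
    """landmarks: (n_samples, n_points * 2) flattened x,y coordinates;
    labels: one emotion class per sample. A lightweight baseline."""
    pts = np.asarray(landmarks, dtype=float).reshape(len(landmarks), -1, 2)
    # translation and scale normalization per face (an assumed step)
    pts = pts - pts.mean(axis=1, keepdims=True)
    pts = pts / (np.linalg.norm(pts, axis=(1, 2), keepdims=True) + 1e-8)
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(pts.reshape(len(pts), -1), labels)
```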


Subjects
Facial Recognition, Humans, Reproducibility of Results, Emotions, Face, Facial Expression
7.
Front Neurosci ; 17: 1280831, 2023.
Article in English | MEDLINE | ID: mdl-37736267
8.
Front Neurorobot ; 17: 1250706, 2023.
Article in English | MEDLINE | ID: mdl-37663762

ABSTRACT

Recognizing occluded facial expressions in the wild poses a significant challenge. However, most previous approaches rely solely on either global or local feature-based methods, leading to the loss of relevant expression features. To address these issues, a feature fusion residual attention network (FFRA-Net) is proposed. FFRA-Net consists of a multi-scale module, a local attention module, and a feature fusion module. The multi-scale module divides the intermediate feature map into several equal sub-feature maps along the channel dimension and applies a convolution operation to each, obtaining diverse global features. The local attention module divides the intermediate feature map into several sub-feature maps along the spatial dimension and applies a convolution operation to each, extracting local key features through the attention mechanism. The feature fusion module integrates global and local expression features while establishing residual links between inputs and outputs to compensate for the loss of fine-grained features. Finally, two occluded-expression datasets (FM_RAF-DB and SG_RAF-DB) were constructed based on the RAF-DB dataset. Extensive experiments demonstrate that the proposed FFRA-Net achieves excellent results on four datasets: FM_RAF-DB, SG_RAF-DB, RAF-DB, and FERPLUS, with accuracies of 77.87%, 79.50%, 88.66%, and 88.97%, respectively. Thus, the approach demonstrates strong applicability for occluded facial expression recognition (FER).
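A minimal sketch of the multi-scale idea, splitting the feature map into equal channel groups and convolving each group with a different kernel size, is given below. The kernel sizes and group count are assumptions; the abstract does not specify them:

```python
import torch
import torch.nn as nn

class MultiScaleModule(nn.Module):
    """Split channels into equal groups and convolve each group separately
    (assumed interpretation of FFRA-Net's multi-scale module)."""
    def __init__(self, channels, groups=4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.convs = nn.ModuleList(
            nn.Conv2d(channels // groups, channels // groups,
                      kernel_size=2 * i + 1, padding=i)  # 3x3, 5x5, 7x7, 9x9
            for i in range(1, groups + 1))

    def forward(self, x):  # x: (B, C, H, W)
        chunks = torch.chunk(x, self.groups, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)],
                         dim=1)
```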

9.
J Neurol ; 270(12): 5731-5755, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37672106

ABSTRACT

Deficits in social cognition may be present in frontotemporal dementia (FTD) and Alzheimer's disease (AD). Here, we conduct a qualitative synthesis and meta-analysis of facial expression recognition studies comparing the deficits between the two disorders. Furthermore, we investigate the specificity of the deficit with regard to phenotypic variant, domain specificity, emotion category, task modality, and geographical region. The results reveal that both FTD and AD are associated with facial expression recognition deficits, that this deficit is more pronounced in FTD than in AD, and that this holds for the behavioral as well as the language variants of FTD, with no difference between the latter two. In both disorders, overall emotion recognition was most frequently impaired, followed by recognition of anger in FTD and of fear in AD. Verbal categorization was the most frequently used task, although matching or intensity-rating tasks may be more specific. Studies from Oceania revealed larger deficits. Non-emotional control tasks, on the other hand, were more impacted by AD than by FTD. The present findings sharpen the social cognitive phenotype of FTD and AD and support the use of social cognition assessment in late-life neuropsychiatric disorders.


Subjects
Alzheimer Disease, Facial Recognition, Frontotemporal Dementia, Humans, Alzheimer Disease/psychology, Frontotemporal Dementia/psychology, Emotions, Phenotype, Neuropsychological Tests, Facial Expression
10.
Affect Sci ; 4(3): 500-505, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37744972

ABSTRACT

Facial expression recognition software is becoming more commonly used by affective scientists to measure facial expressions. Although the use of this software has exciting implications, there are persistent and concerning issues regarding the validity and reliability of these programs. In this paper, we highlight three of these issues: biases of the programs against certain skin colors and genders; the common inability of these programs to capture facial expressions made in non-idealized conditions (e.g., "in the wild"); and programs being forced to adopt the underlying assumptions of the specific theory of emotion on which each software is based. We then discuss three directions for the future of affective science in the area of automated facial coding. First, researchers need to be cognizant of exactly how and on which data sets the machine learning algorithms underlying these programs are being trained. In addition, there are several ethical considerations, such as privacy and data storage, surrounding the use of facial expression recognition programs. Finally, researchers should consider collecting additional emotion data, such as body language, and combine these data with facial expression data in order to achieve a more comprehensive picture of complex human emotions. Facial expression recognition programs are an excellent method of collecting facial expression data, but affective scientists should ensure that they recognize the limitations and ethical implications of these programs.

11.
Sensors (Basel) ; 23(16)2023 Aug 13.
Article in English | MEDLINE | ID: mdl-37631685

ABSTRACT

In recent years, convolutional neural networks (CNNs) have played a dominant role in facial expression recognition. While CNN-based methods have achieved remarkable success, they are notorious for having an excessive number of parameters and for relying on large amounts of manually annotated data. To address this challenge, we expand the number of training samples by learning expressions from a face recognition dataset, reducing the impact of small sample sizes on network training. In the proposed deep joint learning framework, the deep features of the face recognition dataset are clustered while, simultaneously, the parameters of an efficient CNN are learned, thereby labeling the data for network training automatically and efficiently. Specifically, first, we develop a new efficient CNN based on the proposed affinity convolution (AC) module, with much lower computational overhead, for deep feature learning and expression classification. Then, we develop an expression-guided deep facial clustering approach to cluster the deep features and generate abundant expression labels from the face recognition dataset. Finally, the AC-based CNN is fine-tuned using an updated training set and a combined loss function. Our framework is evaluated on several challenging facial expression recognition datasets as well as a self-collected dataset. In the context of facial expression recognition applied to education, our method achieved an impressive accuracy of 95.87% on the self-collected dataset, surpassing existing methods.
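The expression-guided clustering step could be approximated as below: cluster deep features from a face recognition set and keep confident assignments as pseudo-labels. Plain k-means and the percentile-based confidence rule are stand-ins for the paper's more elaborate approach:

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label_expressions(deep_features, n_expressions=7):
    """Cluster deep features into expression-like groups and return
    pseudo-labels for samples close to their centroid. Illustrative only."""
    km = KMeans(n_clusters=n_expressions, n_init=10, random_state=0)
    labels = km.fit_predict(deep_features)
    # keep only samples near their centroid as confident pseudo-labels
    dists = np.linalg.norm(deep_features - km.cluster_centers_[labels], axis=1)
    confident = dists < np.percentile(dists, 50)
    return labels[confident], confident
```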


Subjects
Facial Recognition, Learning, Cluster Analysis, Face, Neural Networks (Computer)
12.
BMC Psychol ; 11(1): 237, 2023 Aug 17.
Article in English | MEDLINE | ID: mdl-37592360

ABSTRACT

BACKGROUND: Emotional cognitive impairment is a core phenotype of the clinical symptoms of psychiatric disorders. The ability to measure emotional cognition is useful for assessing neurodegenerative conditions and treatment responses. However, factors such as culture, gender, and generation influence emotion recognition, and these differences require examination. We investigated the characteristics of facial expression recognition in healthy young Japanese adults. METHODS: We generated 17 models of facial expressions for each of the six basic emotions (happiness, sadness, anger, fear, disgust, and surprise) at three levels of emotional intensity using the Facial Action Coding System (FACS). Thirty healthy young Japanese adults evaluated the type of emotion and the emotional intensity the models represented to them. RESULTS: Assessment accuracy for all emotions except fear exceeded 60% in approximately half of the videos. Facial expressions of fear were rarely recognized accurately. Gender differences were observed with respect to both faces and participants: expressions on female faces were more recognizable than those on male faces, and female participants perceived facial emotions more accurately than males. CONCLUSION: The videos used may constitute a useful dataset, with the possible exception of those representing fear. Participants' ability to recognize the type and intensity of emotions was affected by the gender of the portrayed face and the gender of the evaluator. These gender differences must be considered when developing a scale of facial expression recognition.


Subjects
East Asian People, Facial Expression, Fear, Visual Perception, Female, Humans, Male, Anger, Emotions, Healthy Volunteers, Young Adult, Sex Factors
13.
Sensors (Basel) ; 23(15)2023 Jul 30.
Article in English | MEDLINE | ID: mdl-37571582

ABSTRACT

Facial expressions help individuals convey their emotions. In recent years, thanks to the development of computer vision technology, facial expression recognition (FER) has become a research hotspot and made remarkable progress. However, human faces in real-world environments are affected by various unfavorable factors, such as facial occlusion and head pose changes, which are seldom encountered in controlled laboratory settings and often reduce expression recognition accuracy. Inspired by the recent success of transformers in many computer vision tasks, we propose a model called the fine-tuned channel-spatial attention transformer (FT-CSAT) to improve the accuracy of FER in the wild. FT-CSAT consists of two crucial components: a channel-spatial attention module and a fine-tuning module. In the channel-spatial attention module, the feature map is passed through the channel attention module and the spatial attention module sequentially, so that the final output feature map effectively incorporates both channel and spatial information. Consequently, the network becomes adept at focusing on relevant and meaningful features associated with facial expressions. To further improve the model's performance while controlling the number of parameters, we employ a fine-tuning method. Extensive experimental results demonstrate that FT-CSAT outperforms state-of-the-art methods on two benchmark datasets, RAF-DB and FERPlus, with recognition accuracies of 88.61% and 89.26%, respectively. Furthermore, to evaluate the robustness of FT-CSAT in the case of facial occlusion and head pose changes, we test it on the Occlusion-RAF-DB and Pose-RAF-DB datasets; the results likewise show the superior recognition performance of the proposed method under such conditions.
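The channel-then-spatial attention sequence is reminiscent of CBAM-style modules. The sketch below follows that pattern as an assumed approximation of FT-CSAT's attention component; the reduction ratio and kernel size are illustrative:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Sequential channel then spatial attention, CBAM-style; details of
    FT-CSAT's actual module are assumed, not taken from the paper."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):  # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # channel attention from global average- and max-pooled statistics
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention from channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))
```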


Subjects
Facial Recognition, Humans, Benchmarking, Electric Power Supplies, Emotions, Laboratories, Facial Expression
14.
Article in English | MEDLINE | ID: mdl-37625644

ABSTRACT

Facial emotion (or expression) recognition (FER) is a domain of affective cognition impaired across various psychiatric conditions, including bipolar disorder (BD). We conducted a systematic review and meta-analysis, searching for eligible articles published from inception to April 26, 2023, in PubMed/MEDLINE, Scopus, EMBASE, and PsycINFO, to examine whether and to what extent FER differs between people with BD and those with other mental disorders. Thirty-three studies comparing 1506 BD patients with 1973 clinical controls were included in the present systematic review, and twenty-six of them were analyzed in random-effects meta-analyses exploring discrepancies in discriminating or identifying emotional stimuli at general and specific levels. Individuals with BD were more accurate than individuals diagnosed with schizophrenia (SCZ) in identifying each type of emotion during FER tasks (SMD = 0.27; p-value = 0.006), with specific differences in the perception of anger (SMD = 0.46; p-value = 1.19e-06), fear (SMD = 0.38; p-value = 8.2e-04), and sadness (SMD = 0.33; p-value = 0.026). In contrast, BD patients were less accurate than individuals with major depressive disorder (MDD) in identifying each type of emotion (SMD = -0.24; p-value = 0.014), with differences most specific to sad emotional stimuli (SMD = -0.31; p-value = 0.009). No significant differences were observed when BD was compared with children and adolescents diagnosed with attention-deficit/hyperactivity disorder. FER emerges as a potential integrative instrument for guiding diagnosis by enabling discrimination between BD and SCZ or MDD. Greater standardization of the adopted tasks could further improve the accuracy of this tool, leveraging FER's potential as a therapeutic target.
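For reference, the standardized mean difference (SMD) reported throughout is, in its simplest form, the difference in group means divided by the pooled standard deviation. Below is a Cohen's-d-style sketch; the meta-analysis may instead use Hedges' g with a small-sample correction:

```python
import numpy as np

def standardized_mean_difference(x_bd, x_ctrl):
    """SMD between two groups' FER accuracy scores (Cohen's d form)."""
    n1, n2 = len(x_bd), len(x_ctrl)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(x_bd, ddof=1) +
                         (n2 - 1) * np.var(x_ctrl, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(x_bd) - np.mean(x_ctrl)) / pooled_sd
```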


Subjects
Bipolar Disorder, Major Depressive Disorder, Facial Recognition, Adolescent, Child, Humans, Emotions, Anger
15.
Ophthalmic Physiol Opt ; 43(6): 1344-1355, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37392062

ABSTRACT

PURPOSE: To investigate the effect of low luminance on face recognition, specifically facial identity discrimination (FID) and facial expression recognition (FER), in adults with central vision loss (CVL) and peripheral vision loss (PVL) and to explore the association between clinical vision measures and low luminance FID and FER. METHODS: Participants included 33 adults with CVL, 17 with PVL and 20 controls. FID and FER were assessed under photopic and low luminance conditions. For the FID task, 12 sets of three faces with neutral expressions were presented and participants asked to indicate the odd-face-out. For FER, 12 single faces were presented and participants asked to name the expression (neutral, happy or angry). Photopic and low luminance visual acuity (VA) and contrast sensitivity (CS) were recorded for all participants and for the PVL group, Humphrey Field Analyzer (HFA) 24-2 mean deviation (MD). RESULTS: FID accuracy in CVL, and to a lesser extent PVL, was reduced under low compared with photopic luminance (mean reduction 20% and 8% respectively; p < 0.001). FER accuracy was reduced only in CVL (mean reduction 25%; p < 0.001). For both CVL and PVL, low luminance and photopic VA and CS were moderately to strongly correlated with low luminance FID (ρ = 0.61-0.77, p < 0.05). For PVL, better eye HFA 24-2 MD was moderately correlated with low luminance FID (ρ = 0.54, p = 0.02). Results were similar for low luminance FER. Together, photopic VA and CS explained 75% of the variance in low luminance FID, and photopic VA explained 61% of the variance in low luminance FER. Low luminance vision measures explained little additional variance. CONCLUSION: Low luminance significantly reduced face recognition, particularly for adults with CVL. Worse VA and CS were associated with reduced face recognition. Clinically, photopic VA is a good predictor of face recognition under low luminance conditions.

16.
Healthcare (Basel) ; 11(13)2023 Jun 23.
Article in English | MEDLINE | ID: mdl-37444669

ABSTRACT

Facial expression recognition technology has been utilized both for entertainment purposes and as a valuable aid in rehabilitation and facial exercise assistance. This technology leverages artificial intelligence models to predict facial landmark points and provide visual feedback, thereby facilitating users' facial movements. However, feedback designs that disregard user preferences may cause discomfort and diminish the benefits of exercise. This study aimed to develop a feedback design guide for facial rehabilitation exercises by investigating user responses to various feedback design methods. We created a facial recognition mobile application and designed six feedback variations based on shape and transparency. To evaluate user experience, we conducted a usability test involving 48 participants (24 subjects in their 20s and 24 over 60 years of age), assessing factors such as feedback, assistance, disturbance, aesthetics, cognitive ease, and appropriateness. The experimental results revealed significant differences in transparency, age, and the interaction between transparency and age. Consequently, it is essential to consider both transparency and user age when designing facial recognition feedback. The findings of this study could potentially inform the design of more effective and personalized visual feedback for facial motion, ultimately benefiting users in rehabilitation and exercise contexts.

17.
Cogn Neurodyn ; 17(4): 985-1008, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37522034

ABSTRACT

Facial Expression Recognition (FER) is the basis for many applications, including human-computer interaction and surveillance. When developing such applications, it is imperative to understand human emotions for better interaction with machines. Among the many FER models developed so far, Ensemble Stacked Convolutional Neural Networks (ES-CNN) have empirically improved FER performance on static images. However, existing ES-CNN-based FER models, trained on features extracted from the entire face, are unable to address ambient parameters such as pose, illumination, and occlusion. To mitigate the reduced performance of ES-CNN on partially occluded faces, a component-based ES-CNN (CES-CNN) is proposed. CES-CNN applies ES-CNN to action units of individual face components, such as the eyes, eyebrows, nose, cheeks, mouth, and glabella, each as one subnet of the network. A max-voting-based ensemble classifier combines the decisions of the subnets to obtain optimal recognition accuracy. The proposed CES-CNN is validated through experiments on benchmark datasets, and its performance is compared with state-of-the-art models. The experimental results show that the proposed model achieves a significant enhancement in recognition accuracy compared to existing models.
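Max-voting over per-component subnets can be expressed compactly as below; the tie-breaking rule (argmax takes the lowest class index) is an assumption, since the paper does not specify one:

```python
import numpy as np

def max_voting(subnet_predictions):
    """Combine per-component subnet decisions by majority vote.
    subnet_predictions: (n_subnets, n_samples) array of class ids."""
    preds = np.asarray(subnet_predictions)
    n_classes = preds.max() + 1
    # per-sample vote counts for each class: shape (n_classes, n_samples)
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, preds)
    return votes.argmax(axis=0)
```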

18.
PeerJ Comput Sci ; 9: e1388, 2023.
Article in English | MEDLINE | ID: mdl-37409079

ABSTRACT

An important task in automatic facial expression recognition (FER) is to describe facial image features effectively and efficiently. Facial expression descriptors must be robust to variable scales, illumination changes, face views, and noise. This article studies the application of spatially modified local descriptors to extract robust features for facial expression recognition. The experiments are carried out in two phases: first, we motivate the need for face registration by comparing features extracted from registered and non-registered faces; second, four local descriptors (Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), Compound Local Binary Patterns (CLBP), and Weber's Local Descriptor (WLD)) are optimized by finding the best parameter values for their extraction. Our study reveals that face registration is an important step that can improve the recognition rate of FER systems. We also highlight that suitable parameter selection can increase the performance of existing local descriptors relative to state-of-the-art approaches.
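Extracting two of the four descriptors with scikit-image might look like the sketch below. The parameter values shown are illustrative defaults, not the optimized values the study searched for:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def extract_descriptors(face_gray):
    """HOG plus uniform-LBP features from a registered grayscale face crop."""
    hog_feat = hog(face_gray, orientations=8, pixels_per_cell=(16, 16),
                   cells_per_block=(2, 2), block_norm='L2-Hys')
    # uniform LBP with P=8, R=1 yields values in [0, 9]
    lbp = local_binary_pattern(face_gray, P=8, R=1, method='uniform')
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
    return np.concatenate([hog_feat, lbp_hist])
```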

19.
PeerJ Comput Sci ; 9: e1216, 2023.
Article in English | MEDLINE | ID: mdl-37346544

ABSTRACT

Automatic facial expression recognition (FER) plays a crucial role in human-computer applications such as psychiatric treatment, classroom assessment, and surveillance systems. However, automatic FER is challenging in real-time environments. Traditional methods relied on handcrafted features for FER but mostly failed to produce superior results in wild environments. In this regard, a deep learning-based FER approach with minimal parameters is proposed, which gives better results on both lab-controlled and wild datasets. The method uses a feature-boosting module with skip connections to focus on expression-specific features. The proposed approach achieves accuracies of 70.21%, 96.16%, and 96.52% on the FER-2013 (wild), JAFFE (lab-controlled), and CK+ (lab-controlled) datasets, respectively. The experimental results demonstrate that the proposed method outperforms related research in terms of accuracy and time.

20.
Neural Comput Appl ; 35(20): 14963-14972, 2023.
Article in English | MEDLINE | ID: mdl-37274419

ABSTRACT

Automatic facial expression recognition (AFER), sometimes referred to as emotion recognition, is important for socializing. Over the past two years, automatic methods have faced challenges due to COVID-19 and the widespread wearing of face masks. Machine learning techniques can process tremendous amounts of data and have achieved good results in AFER; however, those techniques are not designed for masked faces and therefore achieve poor recognition on them. This paper introduces a hybrid convolutional neural network, aided by a local binary pattern, to extract features accurately, especially for masked faces. The seven basic emotions to be recognized are anger, happiness, sadness, surprise, contempt, disgust, and fear. The proposed method is applied to two datasets: the first comprises CK and CK+, while the second is M-LFW-FER. The obtained results show that emotion recognition with a face mask achieved an accuracy of 70.76% on three emotions. The results are compared to existing techniques and show significant improvement.
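A two-branch fusion of CNN features with an LBP histogram, one plausible layout for the described hybrid, is sketched below. The architecture details are assumptions; only the CNN-plus-LBP pairing comes from the abstract:

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import local_binary_pattern

class HybridLBPCNN(nn.Module):
    """Small CNN on the (masked) face image plus an LBP histogram branch,
    fused before classification. A hypothetical layout, not the paper's."""
    def __init__(self, n_classes=7, lbp_bins=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32 + lbp_bins, n_classes)

    def forward(self, img, lbp_hist):  # img: (B, 1, H, W); hist: (B, bins)
        return self.head(torch.cat([self.cnn(img), lbp_hist], dim=1))

def lbp_histogram(gray_face):
    """Uniform LBP histogram (P=8, R=1) of one grayscale face crop."""
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
    return torch.tensor(hist, dtype=torch.float32).unsqueeze(0)
```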
